
    The use of datasets of bad quality images to define fundus image quality

    Screening programs for sight-threatening diseases rely on the grading of a large number of digital retinal images. As automatic image grading technology evolves, there emerges a need to provide a rigorous definition of image quality with reference to the grading task. In this work, a feature set based on statistical and task-specific morphological features has been identified on two subsets of the CORD database containing clinically gradable and matching non-gradable digital retinal images. A machine learning technique has then been demonstrated to classify the images according to their clinical gradability, offering a proxy for a rigorous definition of image quality. Clinical Relevance— This work offers a novel strategy to define fundus image quality, to contribute to the development of automatic fundus image graders for retinal screening.
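
    The classification step described above can be illustrated with a minimal sketch (not the authors' exact pipeline): assuming per-image statistical and morphological quality features have already been extracted into a feature matrix, a binary classifier can be trained to predict clinical gradability. The feature values, labels and choice of classifier below are placeholders.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: one row per fundus image; columns stand in for statistical features
# (e.g. contrast, sharpness) and task-specific morphological features
# (e.g. vessel segment counts). Values here are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)      # 1 = clinically gradable, 0 = non-gradable

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean())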

    An imaging-based autorefractor

    Autorefraction consists of the automatic sensing of three parameters - spherical error, cylindrical error, slope of the principal meridian - that describe the deviation of the focusing properties of an ametropic eye with respect to an emmetropic state. Low-cost autorefractors would be highly desirable in resource-poor settings for the stratification of patients between those who can be treated in the community and those who need to be referred to specialist care. In the present paper, we describe the implementation of an autorefractor based on projecting patterns onto the retina of an eye and observing the projected pattern through an ophthalmoscopic camera configuration coaxial with the projection path. Tunable optics in the coaxial path, combined with appropriate image processing, allow determination of the three parameters. The simplicity and performance of the setup, measured on an eye simulator, show promise towards clinical use in the community. Further work is needed to confirm the performance in vivo.
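
    The abstract does not spell out how the three parameters are recovered from the processed images; one standard way to express that step is the meridional power relation P(theta) = S + C*sin^2(theta - alpha), which can be fitted by linear least squares once a defocus value has been measured along several meridians. The sketch below is an illustration under that assumption, with made-up measurements.

import numpy as np

theta = np.radians([0, 30, 60, 90, 120, 150])              # measured meridians
power = np.array([-1.75, -2.0, -1.9, -1.4, -1.1, -1.3])    # dioptres (hypothetical)

# Linearise: P = a0 + a1*cos(2*theta) + a2*sin(2*theta)
A = np.column_stack([np.ones_like(theta), np.cos(2 * theta), np.sin(2 * theta)])
a0, a1, a2 = np.linalg.lstsq(A, power, rcond=None)[0]

C = 2.0 * np.hypot(a1, a2)                 # cylinder magnitude
alpha = 0.5 * np.arctan2(-a2, -a1)         # cylinder axis
S = a0 - C / 2.0                           # spherical error
print(f"S={S:.2f} D, C={C:.2f} D, axis={np.degrees(alpha) % 180:.1f} deg")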

    Estimation of Maximum Shoulder and Elbow Joint Torques Based on Demographics and Anthropometrics

    Repetitive movements that involve a significant shift of the body's center of mass can lead to shoulder and elbow fatigue, which are linked to injury and musculoskeletal disorders if not addressed in time. Research has been conducted on the joint torque individuals can produce, a quantity that indicates the ability of the person to carry out such repetitive movements. Most of the studies surround gait analysis, rehabilitation, the assessment of athletic performance, and robotics. The aim of this study is to develop a model that estimates the maximum shoulder and elbow joint torque an individual can produce based on anthropometrics and demographics, without taking a manual measurement with a force gauge (dynamometer). Nineteen subjects took part in the study, which recorded maximum shoulder and elbow joint torques using a dynamometer. Sex, age, body composition parameters, and anthropometric data were recorded, and relevant parameters which significantly contributed to joint torque were identified using regression techniques. Of the parameters measured, body mass index and upper forearm volume predominantly contribute to maximum torque for the shoulder and elbow joints; coefficient of determination values were between 0.6 and 0.7 for the independent variables and were significant for the maximum shoulder joint torque (P<0.001) and maximum elbow joint torque (P<0.005) models. Two expressions, obtained by multiple linear regression, illustrate the impact of the relevant independent variables on maximum shoulder joint torque and maximum elbow joint torque. Coefficient of determination values for the models were between 0.6 and 0.7. The models developed enable joint torque estimation for individuals using measurements that are quick and easy to acquire, without the use of a dynamometer. This information is useful for those employing joint torque data in biomechanics in the areas of health, rehabilitation, ergonomics, occupational safety, and robotics. Clinical Relevance— The rapid estimation of arm joint torque without direct force measurement can support occupational safety through the prevention of injury and musculoskeletal disorders in a range of working scenarios.
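
    A minimal sketch of the regression step described above (the exact predictors, units and data below are hypothetical, not the study's measurements): maximum shoulder torque is modelled as a linear function of body mass index and forearm volume via multiple linear regression.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical subject data: predictors and measured maximum torque.
bmi = np.array([22.1, 27.5, 24.3, 30.2, 21.0, 26.8, 23.5, 28.9])         # kg/m^2
forearm_vol = np.array([1.10, 1.45, 1.25, 1.60, 1.05, 1.38, 1.20, 1.52])  # litres
max_torque = np.array([38.0, 55.0, 44.0, 63.0, 35.0, 52.0, 41.0, 58.0])   # N*m

X = np.column_stack([bmi, forearm_vol])
model = LinearRegression().fit(X, max_torque)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2:", model.score(X, max_torque))   # the abstract reports R^2 of 0.6-0.7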

    Validation of Endurance Model for Manual Tasks

    Physical fatigue in the workplace can lead to work-related musculoskeletal disorders (WMSDs), especially in occupations that require repetitive, mid-air movements, such as manufacturing and assembly tasks in industry settings. The current paper endeavors to validate an existing torque-based fatigue prediction model for lifting tasks. The model uses anthropometrics and the maximum torque of the individual to predict the time to fatigue. Twelve participants took part in the study, which measured body composition parameters and the maximum force produced by the shoulder joint in flexion, followed by three lifting tasks for the shoulder in flexion, including isometric and dynamic tasks with one and two hands. Inertial measurement units (IMUs) were worn by participants to determine the torque at each instant and to calculate the endurance time and CE, while physical exertion was assessed with a self-report questionnaire, the Borg Rating of Perceived Exertion (RPE) scale. The model was effective for the static and two-handed tasks and produced errors in the range of [28.62, 49.21] for the last task completed, indicating that previous workloads affect the endurance time even when the individual perceives they are fully rested. The model was not effective for the one-handed dynamic task, and differences were observed between males and females, which will be the focus of future work. An individualized, torque-based fatigue prediction model, such as the model presented, can be used to design worker-specific target levels and workloads, take inter- and intra-individual differences into account, and put fatigue-mitigating interventions in place before fatigue occurs, potentially preventing WMSDs, aiding worker wellbeing and benefitting the quality and efficiency of the work output. Clinical Relevance— This research provides the basis for an individualized, torque-based approach to the prediction of fatigue at the shoulder joint, which can be used to assign worker tasks and rest breaks, design worker-specific targets and reduce the prevalence of work-related musculoskeletal disorders in occupational settings.
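
    The validated model itself is not reproduced in the abstract; purely as an illustration of a torque-based endurance relation of the kind described, the sketch below uses a generic power-law of relative load (instantaneous joint torque divided by the individual's maximum torque), with hypothetical coefficients.

import numpy as np

def endurance_time(torque, max_torque, a=10.0, b=1.5):
    """Predicted time to fatigue (minutes) at a constant relative load.
    The power-law form and coefficients a, b are illustrative placeholders."""
    rel_load = np.clip(torque / max_torque, 1e-3, 1.0)
    return a * rel_load ** (-b)

max_torque = 55.0        # N*m, measured with a dynamometer
shoulder_torque = 22.0   # N*m, estimated from IMU-derived kinematics
print(f"predicted endurance: {endurance_time(shoulder_torque, max_torque):.1f} min")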

    Motion capture technology in industrial applications: A systematic review

    The rapid technological advancements of Industry 4.0 have opened up new vectors for novel industrial processes that require advanced sensing solutions for their realization. Motion capture (MoCap) sensors, such as visual cameras and inertial measurement units (IMUs), are frequently adopted in industrial settings to support solutions in robotics, additive manufacturing, teleworking and human safety. This review synthesizes and evaluates studies investigating the use of MoCap technologies in industry-related research. A search was performed in Embase, Scopus, Web of Science and Google Scholar. Only studies in English, from 2015 onwards, on primary and secondary industrial applications were considered. The quality of the articles was appraised with the AXIS tool. Studies were categorized based on the type of sensors used, the beneficiary industry sector, and the type of application. Study characteristics, key methods and findings were also summarized. In total, 1682 records were identified, and 59 were included in this review. Twenty-one and 38 studies were assessed as being prone to medium and low risks of bias, respectively. Camera-based sensors and IMUs were used in 40% and 70% of the studies, respectively. Construction (30.5%), robotics (15.3%) and automotive (10.2%) were the most researched industry sectors, whilst health and safety (64.4%) and the improvement of industrial processes or products (17%) were the most targeted applications. Inertial sensors were the first choice for industrial MoCap applications. Camera-based MoCap systems performed better in robotic applications, but camera obstruction caused by workers and machinery was the most challenging issue. Advancements in machine learning algorithms have been shown to increase the capabilities of MoCap systems in applications such as activity and fatigue detection as well as tool condition monitoring and object recognition.

    A review of feature-based retinal image analysis

    Retinal imaging is a fundamental tool in ophthalmic diagnostics. The potential use of retinal imaging within screening programs, with the consequent need to analyze large numbers of images with high throughput, is pushing the digital image analysis field to find new solutions for the extraction of specific information from the retinal image. The aim of this review is to explore the latest progress in image processing techniques able to recognize specific retinal image features and potential features of disease. In particular, this review aims to describe publicly available retinal image databases, highlight different performance evaluators commonly used within the field, outline current approaches in feature-based retinal image analysis, and map related trends. This review found two key areas to be addressed for the future development of automatic retinal image analysis: fundus image quality and the effect image processing may have on relevant clinical information within the images. The performance evaluators reported for the algorithms reviewed are very promising; however, absolute values are difficult to interpret when validating system suitability for use within clinical practice.
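
    For the performance evaluators mentioned above, a small worked example: sensitivity, specificity and accuracy computed from a pixel-wise confusion matrix for a hypothetical vessel-segmentation result (the counts are made up).

# Hypothetical pixel counts for one segmented retinal image.
tp, fp, tn, fn = 9200, 600, 88000, 1100

sensitivity = tp / (tp + fn)     # fraction of true vessel pixels detected
specificity = tn / (tn + fp)     # fraction of background pixels correctly kept
accuracy = (tp + tn) / (tp + fp + tn + fn)
print(f"Se={sensitivity:.3f}  Sp={specificity:.3f}  Acc={accuracy:.3f}")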

    Synthetic stereo images of the optic disc from the CORD dataset

    Advancement of techniques for 3D reconstruction of the optic disc could lead to affordable objective detection of glaucoma. Applying computer stereo vision techniques to image pairs is particularly promising. More data, along with the stereo camera calibration parameters and ground truths required for validation, could aid development. This work presents a method to generate, using a virtual environment, synthetic stereo images of optic discs from images in the CORD database and to obtain the corresponding stereo camera calibration parameters and ground truths. Our own reconstruction technique was tested and quantitatively validated using data created with this environment.
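
    As a sketch of how the calibration parameters support reconstruction and validation (this is not the authors' technique): for a rectified stereo pair with known focal length and baseline, depth follows from disparity as Z = f*B/d. The numbers below are hypothetical.

import numpy as np

f_px = 1200.0          # focal length from the stereo calibration, pixels
baseline = 0.004       # separation between the two views, metres
disparity = np.array([18.0, 20.5, 23.0])   # matched optic-disc points, pixels

depth = f_px * baseline / disparity          # metres along the optical axis
print(depth)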

    Feasibility analysis of an Electrical Impedance Tomography (EIT) wearable system

    The thesis concerns the feasibility of implementing the electrical impedance tomography (EIT) technique on a device for health monitoring, the BodyGateWay (BGW), developed by STMicroelectronics at Agrate Brianza (MB). EIT, in clinical practice, is used to provide tomographic images in a non-invasive way, by means of electrical measurements made from a series of electrodes placed on the surface of the region of interest. Nowadays the most promising applications of EIT are the monitoring of patients with complications related to the respiratory system and intra-operative imaging for surgical assistance. The challenge of implementing additional functionality without drastically changing the architecture of the device has driven the development of this work. The first phase of the project involved the analysis of algorithms and mathematical structures developed in the MATLAB environment within the EIDORS project for the solution of the forward and inverse problems related to EIT. Optimal parameters regarding acquisition methods and matrices for conditioning the ill-posed problem (a priori information about the conductivity distribution, noise, geometrical factors) were identified. These operations were performed through COMSOL simulations on complex anatomical models, demonstrating the feasibility of EIT image reconstruction techniques in the ideal case. The next stage involved the hardware and software design of an electronic device, interfaced to the BGW, for the automatic selection of electrode pairs for current injection and bio-impedance acquisition. The whole system was tested on a phantom which replicated the typical conductivity of the tissues of the thoracic region. The results led to further optimization, obtaining image quality comparable to that presented in the literature. The final tests, conducted on healthy subjects, produced promising preliminary results, highlighting the potential offered by the implementation of this imaging methodology in a low-power device designed for continuous patient monitoring.
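
    The inverse-problem step referred to above can be sketched, purely as an illustration, with a one-step Tikhonov-regularised linearised reconstruction, delta_sigma = (J'J + lambda^2 R)^(-1) J' delta_v; EIDORS provides equivalent MATLAB tooling, while the placeholder below uses NumPy with a random sensitivity matrix.

import numpy as np

rng = np.random.default_rng(0)
n_meas, n_elems = 208, 500            # e.g. 16 electrodes, coarse FEM mesh
J = rng.normal(size=(n_meas, n_elems))        # forward-model sensitivity (Jacobian)
delta_v = rng.normal(size=n_meas)             # measured boundary-voltage changes
lam = 0.05                                     # regularisation weight (hypothetical)
R = np.eye(n_elems)                            # simple identity prior

delta_sigma = np.linalg.solve(J.T @ J + lam**2 * R, J.T @ delta_v)
print(delta_sigma.shape)                       # conductivity change per mesh element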

    Artifact removal in digital retinal images

    Globally 2.2 million people are visually impaired and, of these, approximately 1 million present forms of visual impairment that could be addressed or prevented. Retinal imaging is a key step in the diagnosis and follow-up of major causes of visual impairment. As much as 20% of retinal images collected in the population are affected by artifacts that render them ungradable both by expert graders and by the more recent automatic grading systems. This work aims to develop an artifact removal strategy able to improve the effectiveness of retinal image grading, in particular for retinal feature segmentation. First, a large group of statistical parameters designed to measure image quality has been selected from the literature. A new ophthalmic database was then collected (CORD, the Comprehensive Ophthalmic Research Database), which includes retinal images with and without artifacts. A mathematical model describing artifacts on the basis of the interaction of light with the eye during eye photography was then developed. CORD and the mathematical model were then used to train a binary classifier to distinguish pixels affected by distortions within the image, without the need for interpretive knowledge of the image itself, and, on this basis, to establish a validation criterion for quality improvement in retinal images. Finally, an algorithm was developed to isolate the regions of retinal images affected by artifacts and to subtract from the images the additive contributions to the distortion. The artifact clean-up has been shown to increase the textural information of the retinal images, improving vessel segmentation by more than 10%. By avoiding the use of interpretative elements of the image, this improvement in the quality of retinal images is agnostic to specific disease processes, and thus potentially applicable to population screening. Further work is necessary to improve the cosmetic quality of the images, to optimise the artifact removal strategy, and to relate the feature extraction improvement to clinical performance.
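
    A minimal sketch of the clean-up idea described above (not the thesis implementation): flag artifact pixels with a mask, for instance from a per-pixel classifier, estimate the additive, glare-like component inside the flagged region, and subtract it. The blur-based background estimate and all values below are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def remove_additive_artifact(img, artifact_mask, sigma=25):
    """img: float grayscale fundus image in [0, 1]; artifact_mask: boolean array."""
    background = gaussian_filter(img, sigma)           # smooth estimate of retinal content
    additive = np.where(artifact_mask, img - background, 0.0)
    additive = np.clip(additive, 0.0, None)            # keep only the added light
    return np.clip(img - additive, 0.0, 1.0)

img = np.random.rand(128, 128).astype(np.float32)      # placeholder image
mask = np.zeros((128, 128), dtype=bool)
mask[40:80, 40:80] = True                              # hypothetical artifact region
cleaned = remove_additive_artifact(img, mask)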
